# 4-bit quantization compression

## Molmo 7B D Bnb 4bit
License: Apache-2.0
Molmo-7B-D is a large language model quantized to 4-bit with bitsandbytes (BnB). Quantization shrinks the checkpoint from about 30 GB to 7 GB and lowers the GPU memory (VRAM) requirement to roughly 12 GB; a loading sketch follows below.
Tags: Large Language Model, Transformers
Author: cyan2k
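As a rough illustration of how a BnB 4-bit checkpoint like the one above is typically produced, the sketch below quantizes a full-precision model on the fly with Hugging Face Transformers and bitsandbytes. The base repo id `allenai/Molmo-7B-D-0924`, the NF4 settings, and the output path are assumptions for illustration, not details taken from this listing.

```python
# Sketch: 4-bit (NF4) quantization with bitsandbytes via Transformers.
# The base repo id below is an assumption; the pre-quantized card above
# (author cyan2k) was presumably built with a similar procedure, but its
# exact settings are not stated in the listing.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # store weights in 4-bit
    bnb_4bit_quant_type="nf4",             # NormalFloat4 quantization
    bnb_4bit_compute_dtype=torch.float16,  # dequantize to fp16 for matmuls
    bnb_4bit_use_double_quant=True,        # also quantize the quantization constants
)

model = AutoModelForCausalLM.from_pretrained(
    "allenai/Molmo-7B-D-0924",             # assumed full-precision base model
    quantization_config=bnb_config,
    trust_remote_code=True,                # Molmo ships custom modeling code
    device_map="auto",
)

# Optionally save the quantized weights for re-sharing as a much smaller repo
# (requires a recent bitsandbytes with 4-bit serialization support).
model.save_pretrained("molmo-7b-d-bnb-4bit")
```

Loading an already-quantized checkpoint such as the one listed here skips the explicit config: the quantization settings are stored in the repo, so a plain `from_pretrained` on that repo id, with bitsandbytes installed, is enough.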